<p>Prior to Apache 1.3 the <code class="directive"><a href="../mod/prefork.html#minspareservers">MinSpareServers</a></code>, <code class="directive"><a href="../mod/prefork.html#maxspareservers">MaxSpareServers</a></code>, and <code class="directive"><a href="../mod/mpm_common.html#startservers">StartServers</a></code> settings all had drastic effects on
benchmark results. In particular, Apache required a "ramp-up"
period in order to reach a number of children sufficient to serve
the load being applied. After the initial spawning of
<code class="directive"><a href="../mod/mpm_common.html#startservers">StartServers</a></code> children, only one child per second
would be created to satisfy the <code class="directive"><a href="../mod/prefork.html#minspareservers">MinSpareServers</a></code>
setting. So a server being accessed by 100 simultaneous
clients, using the default <code class="directive"><a href="../mod/mpm_common.html#startservers">StartServers</a></code> of <code>5</code> would take on
the order of 95 seconds to spawn enough children to handle
the load. This works fine in practice on real-life servers,
because they aren't restarted frequently. But it does really
poorly on benchmarks which might only run for ten minutes.</p>
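<p>For reference, these knobs are ordinary server-config directives. A
minimal prefork-style sketch looks like the following; the numbers are
purely illustrative and should be sized to your own traffic rather than
copied verbatim:</p>

<div class="example"><p><code>
StartServers       50<br />
MinSpareServers    25<br />
MaxSpareServers    75
</code></p></div>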
<p>The one-per-second rule was implemented in an effort to
avoid swamping the machine with the startup of new children. If
the machine is busy spawning children it can't service
requests. But it has such a drastic effect on the perceived
performance of Apache that it had to be replaced. As of Apache
1.3, the code will relax the one-per-second rule. It will spawn
one, wait a second, then spawn two, wait a second, then spawn
four, and it will continue exponentially until it is spawning
32 children per second. It will stop whenever it satisfies the
<code class="directive"><a href="../mod/prefork.html#minspareservers">MinSpareServers</a></code> setting.</p>
<p>This appears to be responsive enough that it's almost
unnecessary to twiddle the <code class="directive"><a href="../mod/prefork.html#minspareservers">MinSpareServers</a></code>, <code class="directive"><a href="../mod/prefork.html#maxspareservers">MaxSpareServers</a></code> and <code class="directive"><a href="../mod/mpm_common.html#startservers">StartServers</a></code> knobs. When more than 4 children are
spawned per second, a message will be emitted to the
<code class="directive"><a href="../mod/core.html#errorlog">ErrorLog</a></code>. If you
see a lot of these errors then consider tuning these settings.
Use the <code class="module"><a href="../mod/mod_status.html">mod_status</a></code> output as a guide.</p>
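<p>If <code class="module"><a href="../mod/mod_status.html">mod_status</a></code> is not already
configured, an access-restricted setup looks roughly like this; the
location and the allowed address are only examples, and the
access-control syntax shown is the 2.0-era <code>Order</code>/<code>Allow</code> style:</p>

<div class="example"><p><code>
&lt;Location /server-status&gt;<br />
<span class="indent">
SetHandler server-status<br />
Order deny,allow<br />
Deny from all<br />
Allow from 127.0.0.1<br />
</span>
&lt;/Location&gt;
</code></p></div>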
<p>Related to process creation is process death induced by the
<code class="directive"><a href="../mod/mpm_common.html#maxrequestsperchild">MaxRequestsPerChild</a></code> setting.</p>

<h3>Choosing an MPM</h3>

<p>Apache 2.x supports pluggable concurrency models, called
<a href="../mpm.html">Multi-Processing Modules</a> (MPMs). There are
platform-specific MPMs such as <code class="module"><a href="../mod/mpmt_os2.html">mpmt_os2</a></code> and <code class="module"><a href="../mod/mpm_winnt.html">mpm_winnt</a></code>. For
general Unix-type systems, there are several MPMs from which
to choose. The choice of MPM can affect the speed and scalability
of the httpd.</p>

<h3>Modules</h3>
<p>Since memory usage is such an important consideration in
performance, you should attempt to eliminate modules that you are
not actually using. If you have built the modules as <a href="../dso.html">DSOs</a>, eliminating modules is a simple
matter of commenting out the associated <code class="directive"><a href="../mod/mod_so.html#loadmodule">LoadModule</a></code> directive for that module.
This allows you to experiment with removing modules, and seeing
if your site still functions in their absence.</p>
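<p>For example, to test a site without <code class="module"><a href="../mod/mod_status.html">mod_status</a></code>, you would comment out its
<code class="directive"><a href="../mod/mod_so.html#loadmodule">LoadModule</a></code> line and restart;
the module name and path shown here are typical, but may differ on your
installation:</p>

<div class="example"><p><code>
# LoadModule status_module modules/mod_status.so
</code></p></div>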
<p>If, on the other hand, you have modules statically linked
into your Apache binary, you will need to recompile Apache in
order to remove unwanted modules.</p>
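<p>With a statically linked build, the equivalent choice is made at
<code>configure</code> time. The exact option names depend on your httpd
version (check <code>./configure --help</code>); an invocation along these
lines, with the disabled modules chosen only as examples, is typical:</p>

<div class="example"><p><code>
./configure --prefix=/usr/local/apache2 --disable-autoindex --disable-userdir
</code></p></div>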
<p>An associated question that arises here is, of course, what
modules you need, and which ones you don't. The answer here
will, of course, vary from one web site to another. However, the
<em>minimal</em> list of modules which you can get by with tends
to include <code class="module"><a href="../mod/mod_mime.html">mod_mime</a></code>, <code class="module"><a href="../mod/mod_dir.html">mod_dir</a></code>,
and <code class="module"><a href="../mod/mod_log_config.html">mod_log_config</a></code>. <code>mod_log_config</code> is,
of course, optional, as you can run a web site without log
files. This is, however, not recommended.</p>
<h3>Atomic Operations</h3>
<p>Some modules, such as <code class="module"><a href="../mod/mod_cache.html">mod_cache</a></code> and
recent development builds of the worker MPM, use APR's
atomic API. This API provides atomic operations that can
be used for lightweight thread synchronization.</p>
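<p>As a rough illustration of what the API looks like to a module author,
here is a standalone sketch. It is not code from Apache itself, and it
assumes the APR 1.x names in <code>apr_atomic.h</code> (which carry a
<code>32</code> suffix); adjust for the APR version you build against:</p>

<div class="example"><pre>
#include &lt;stdio.h&gt;
#include "apr_general.h"
#include "apr_atomic.h"

int main(void)
{
    apr_pool_t *pool;
    volatile apr_uint32_t counter = 0;
    apr_uint32_t old;

    apr_initialize();
    apr_pool_create(&amp;pool, NULL);
    apr_atomic_init(pool);            /* required before using the atomic calls */

    apr_atomic_inc32(&amp;counter);       /* atomic increment: counter is now 1 */

    /* compare-and-swap: if counter is still 1, replace it with 5;
     * the return value is the value seen before the swap */
    old = apr_atomic_cas32(&amp;counter, 5, 1);

    printf("saw %u, counter is now %u\n",
           (unsigned)old, (unsigned)apr_atomic_read32(&amp;counter));

    apr_pool_destroy(pool);
    apr_terminate();
    return 0;
}
</pre></div>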
<p>By default, APR implements these operations using the
most efficient mechanism available on each target
OS/CPU platform. Many modern CPUs, for example, have
an instruction that does an atomic compare-and-swap (CAS)
operation in hardware. On some platforms, however, APR
defaults to a slower, mutex-based implementation of the
atomic API in order to ensure compatibility with older
CPU models that lack such instructions. If you are
building Apache for one of these platforms, and you plan
to run only on newer CPUs, you can select a faster atomic
implementation at build time by configuring Apache with
the <code>--enable-nonportable-atomics</code> option:</p>
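<div class="example"><p><code>
./buildconf<br />
./configure --with-mpm=worker --enable-nonportable-atomics=yes
</code></p></div>

<p>(The <code>--with-mpm=worker</code> choice above is only an example; pass
whichever MPM you are actually building. The <code>./buildconf</code> step is
needed only when building from a source-control checkout rather than a
release tarball.)</p>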
<h3>mod_status and ExtendedStatus On</h3>

<p>If you include <code class="module"><a href="../mod/mod_status.html">mod_status</a></code>
and you also set <code>ExtendedStatus On</code> when building and running
Apache, then on every request Apache will perform two calls to
<code>gettimeofday(2)</code> (or <code>times(2)</code>
depending on your operating system), and (pre-1.3) several
extra calls to <code>time(2)</code>. This is all done so that
the status report contains timing indications. For highest
performance, set <code>ExtendedStatus off</code> (which is the
default).</p>
<h3>accept Serialization - multiple sockets</h3>
<div class="warning"><h3>Warning:</h3>
<p>This section has not been fully updated
to take into account changes made in the 2.0 version of the
Apache HTTP Server. Some of the information may still be
relevant, but please use it with care.</p>
</div>
<p>This discusses a shortcoming in the Unix socket API. Suppose
your web server uses multiple <code class="directive"><a href="../mod/mpm_common.html#listen">Listen</a></code> statements to listen on either multiple
ports or multiple addresses. In order to test each socket
to see if a connection is ready Apache uses
<code>select(2)</code>. <code>select(2)</code> indicates that a
socket has <em>zero</em> or <em>at least one</em> connection
waiting on it. Apache's model includes multiple children, and
all the idle ones test for new connections at the same time. A
naive implementation looks something like this (these examples
do not match the code, they're contrived for pedagogical
purposes):</p>
<div class="example"><p><code>
for (;;) {<br />
<span class="indent">
for (;;) {<br />
<span class="indent">
fd_set accept_fds;<br />
<br />
FD_ZERO (&accept_fds);<br />
for (i = first_socket; i <= last_socket; ++i) {<br />
<p>In the worker MPM, the listener thread is able to accept another connection
as soon as it has dispatched the previous one to a worker thread (subject
to some flow-control logic that throttles the listener
if all the available workers are busy). Though it isn't apparent from
a system-call trace, the next <code>accept(2)</code> can (and usually does, under
high load conditions) occur in parallel with the worker thread's handling
of the just-accepted connection.</p>
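<p>As a contrived illustration of that hand-off pattern (this is <em>not</em>
the worker MPM's code: it uses a single listening socket, a single worker
thread, a one-slot queue, and no error handling), a POSIX-threads sketch
might look like this:</p>

<div class="example"><pre>
#include &lt;pthread.h&gt;
#include &lt;string.h&gt;
#include &lt;unistd.h&gt;
#include &lt;sys/socket.h&gt;
#include &lt;netinet/in.h&gt;
#include &lt;arpa/inet.h&gt;

static pthread_mutex_t lock     = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  nonempty = PTHREAD_COND_INITIALIZER;
static pthread_cond_t  empty    = PTHREAD_COND_INITIALIZER;
static int pending_fd = -1;               /* -1 means "no connection waiting" */

static void *worker_thread(void *arg)
{
    for (;;) {
        int fd;
        pthread_mutex_lock(&amp;lock);
        while (pending_fd == -1)
            pthread_cond_wait(&amp;nonempty, &amp;lock);
        fd = pending_fd;
        pending_fd = -1;
        pthread_cond_signal(&amp;empty);      /* the hand-off slot is free again */
        pthread_mutex_unlock(&amp;lock);

        /* "handle" the connection; in httpd this is request processing,
         * and it overlaps with the listener's next accept() */
        (void) write(fd, "hello\n", 6);
        close(fd);
    }
    return NULL;
}

int main(void)
{
    struct sockaddr_in addr;
    pthread_t tid;
    int listen_fd = socket(AF_INET, SOCK_STREAM, 0);

    memset(&amp;addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8080);
    bind(listen_fd, (struct sockaddr *)&amp;addr, sizeof(addr));
    listen(listen_fd, 16);

    pthread_create(&amp;tid, NULL, worker_thread, NULL);

    for (;;) {                            /* this thread plays the listener role */
        int fd = accept(listen_fd, NULL, NULL);
        if (fd == -1)
            continue;
        pthread_mutex_lock(&amp;lock);
        while (pending_fd != -1)          /* crude flow control: wait for a free slot */
            pthread_cond_wait(&amp;empty, &amp;lock);
        pending_fd = fd;
        pthread_cond_signal(&amp;nonempty);
        pthread_mutex_unlock(&amp;lock);      /* immediately free to accept() again */
    }
}
</pre></div>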
</div></div>
<div class="bottomlang">
<p><span>Available Languages: </span><a href="../en/misc/perf-tuning.html" title="English"> en </a> |
<a href="../ko/misc/perf-tuning.html" hreflang="ko" rel="alternate" title="Korean"> ko </a></p>
</div><div id="footer">
<p class="apache">Copyright 1995-2006 The Apache Software Foundation or its licensors, as applicable.<br />Licensed under the <a href="http://www.apache.org/licenses/LICENSE-2.0">Apache License, Version 2.0</a>.</p>